YouTube videos: Neural Network Vulnerabilities
Deep Learning's Most Dangerous Vulnerability: Adversarial Attacks at Silicon Valley Code Camp 2019
Model Stealing Attacks Against Inductive Graph Neural Networks
Visualizing the Impact of Adversarial Attacks on Perception in Convolutional Neural Networks
32 - Neural Network Assisted Fuzzing - Discovering Software Vulnerabilities Using Deep Learning
Exploring Vulnerabilities in Spiking Neural Networks: Direct Adversarial Attacks on Raw Event Data
Backdooring and Poisoning Neural Networks with Image-Scaling Attacks
USENIX Security '24 - Hijacking Attacks against Neural Network by Analyzing Training Data
Security Vulnerabilities in Machine Learning
Adversarial Attacks on AI: Impact and Defenses
Graph Neural Network-based Vulnerability Prediction
USENIX Security '22 - Inference Attacks Against Graph Neural Networks
VCodeDet: a Graph Neural Network for Source Code Vulnerability Detection
Evasion Attacks on Neural Networks
Adversarial Attacks On Machine Learning-Based Malware Detection Systems
Visual Analytics of Neuron Vulnerability to Adversarial Attacks on Convolutional Neural Networks
Deep Learning Based Vulnerability Detection: Are We There Yet?
LineVD: Statement-level Vulnerability Detection using Graph Neural Networks
MVD: Memory-Related Vulnerability Detection Based on Flow-Sensitive Graph Neural Networks
[ACM MTD workshop 2021] Using Honeypots to Catch Adversarial Attacks on Neural Networks